Rates of superlinear convergence for classical quasi-Newton methods
Authors
Abstract
We study the local convergence of classical quasi-Newton methods for nonlinear optimization. Although it was well established a long time ago that asymptotically these methods converge superlinearly, the corresponding rates of convergence still remain unknown. In this paper, we address this problem. We obtain the first explicit non-asymptotic rates of superlinear convergence for the standard quasi-Newton methods, which are based on the updating formulas from the convex Broyden class. In particular, for the well-known DFP and BFGS methods, we obtain rates of the form $(\frac{n L^2}{\mu^2 k})^{k/2}$ and $(\frac{n L}{\mu k})^{k/2}$ respectively, where $k$ is the iteration counter, $n$ is the dimension of the problem, $\mu$ is the strong convexity parameter, and $L$ is the Lipschitz constant of the gradient.
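To make the quoted quantities concrete, here is a small numerical sketch (not taken from the paper): it runs classical BFGS on a random strongly convex quadratic and prints the gradient norm next to the BFGS-type bound $(\frac{n L}{\mu k})^{k/2}$ from the abstract. The test problem, the exact line search, and the stopping rule are illustrative assumptions; the paper's local analysis uses unit steps.

```python
# Minimal sketch (not from the paper): classical BFGS on a strongly convex quadratic,
# comparing the observed gradient norm with the bound (n*L/(mu*k))^(k/2) from the abstract.
import numpy as np

np.random.seed(0)
n = 20
eigs = np.linspace(1.0, 10.0, n)            # eigenvalues in [mu, L] = [1, 10]
Q, _ = np.linalg.qr(np.random.randn(n, n))
A = Q @ np.diag(eigs) @ Q.T                 # Hessian of f(x) = 0.5 * x^T A x
mu, L = eigs[0], eigs[-1]
grad = lambda x: A @ x

x = np.ones(n)
H = np.eye(n) / L                           # initial inverse-Hessian approximation
g = grad(x)
for k in range(1, 31):
    d = -H @ g                              # quasi-Newton direction
    alpha = -(g @ d) / (d @ (A @ d))        # exact line search (assumption; the paper uses unit steps)
    x_new = x + alpha * d
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g
    if s @ y > 1e-16:                       # curvature condition keeps H positive definite
        rho = 1.0 / (s @ y)
        V = np.eye(n) - rho * np.outer(s, y)
        H = V @ H @ V.T + rho * np.outer(s, s)   # BFGS update of the inverse Hessian
    x, g = x_new, g_new
    bound = (n * L / (mu * k)) ** (k / 2)   # BFGS superlinear rate quoted in the abstract
    print(f"k={k:2d}  ||grad|| = {np.linalg.norm(g):.3e}   bound = {bound:.3e}")
    if np.linalg.norm(g) < 1e-12:
        break
```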
Similar articles
Convergence rates of sub-sampled Newton methods
We consider the problem of minimizing a sum of n functions via projected iterations onto a convex parameter set C ⊂ R^p, where n ≫ p ≫ 1. In this regime, algorithms which utilize sub-sampling techniques are known to be effective. In this paper, we use sub-sampling techniques together with low-rank approximation to design a new randomized batch algorithm which possesses comparable convergence rate to ...
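For illustration only, below is a generic sub-sampled Newton step with a truncated-eigenvalue (low-rank) approximation of the sampled Hessian, shown on regularized logistic regression. The data, sample size, rank, and regularization are assumptions, and this is not necessarily the exact algorithm analyzed in that paper.

```python
# Generic sub-sampled Newton step with a truncated-eigenvalue (low-rank) Hessian
# approximation, shown on L2-regularized logistic regression. All problem settings
# below are illustrative assumptions.
import numpy as np

def loss(w, X, y, lam):
    z = X @ w
    # Numerically stable logistic loss with labels in {0, 1}.
    return np.mean(np.maximum(z, 0) - y * z + np.log1p(np.exp(-np.abs(z)))) + 0.5 * lam * w @ w

def subsampled_newton_step(w, X, y, lam, sample_size, rank, rng):
    n, p = X.shape
    prob = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (prob - y) / n + lam * w                    # full gradient
    idx = rng.choice(n, size=sample_size, replace=False)     # sub-sample for the Hessian
    Xs, ds = X[idx], prob[idx] * (1.0 - prob[idx])
    H = Xs.T @ (Xs * ds[:, None]) / sample_size + lam * np.eye(p)
    vals, vecs = np.linalg.eigh(H)                           # ascending eigenvalues
    top_vals, top_vecs = vals[-rank:], vecs[:, -rank:]       # keep top-`rank` eigenpairs
    tail = vals[-rank - 1]                                   # curvature used outside that subspace
    coef = top_vecs.T @ grad
    d = top_vecs @ (coef / top_vals) + (grad - top_vecs @ coef) / tail
    return w - d

rng = np.random.default_rng(0)
n, p = 2000, 50
X = rng.standard_normal((n, p))
y = (X @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n) > 0).astype(float)
w = np.zeros(p)
for it in range(10):
    w = subsampled_newton_step(w, X, y, lam=1e-3, sample_size=200, rank=20, rng=rng)
    print(f"iter {it}: loss = {loss(w, X, y, 1e-3):.6f}")
```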
Damped techniques for enforcing convergence of quasi-Newton methods
Quasi-Newton Methods for Nonconvex Constrained Multiobjective Optimization
Here, a quasi-Newton algorithm for constrained multiobjective optimization is proposed. Under suitable assumptions, global convergence of the algorithm is established.
On the convergence of quasi-Newton methods for nonsmooth problems
We develop a theory of quasi-Newton and least-change update methods for solving systems of nonlinear equations F(x) = 0. In this theory, no differentiability conditions are necessary. Instead, we assume that F can be approximated, in a weak sense, by an affine function in a neighborhood of a solution. Using this assumption, we prove local and ideal convergence. Our theory can be applied to B-diffe...
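As a concrete example of the least-change quasi-Newton schemes referred to above, here is a minimal sketch of Broyden's update for a small system F(x) = 0. The test system, starting point, and initialization are illustrative assumptions, not taken from that paper.

```python
# Minimal sketch of Broyden's least-change update for solving F(x) = 0.
# The test system, starting point, and initialization are illustrative assumptions.
import numpy as np

def F(x):
    # Small nonlinear system with a root at (1, 1).
    return np.array([x[0] ** 2 + x[1] ** 2 - 2.0,
                     np.exp(x[0] - 1.0) + x[1] ** 3 - 2.0])

def fd_jacobian(F, x, h=1e-6):
    # Forward-difference Jacobian, used only to initialize the approximation B.
    J = np.zeros((len(x), len(x)))
    Fx = F(x)
    for j in range(len(x)):
        e = np.zeros(len(x)); e[j] = h
        J[:, j] = (F(x + e) - Fx) / h
    return J

x = np.array([1.2, 0.9])
B = fd_jacobian(F, x)
Fx = F(x)
for _ in range(50):
    s = np.linalg.solve(B, -Fx)             # quasi-Newton step: B s = -F(x)
    x_new = x + s
    F_new = F(x_new)
    y = F_new - Fx
    B += np.outer(y - B @ s, s) / (s @ s)   # Broyden update: B+ = B + (y - B s) s^T / (s^T s)
    x, Fx = x_new, F_new
    if np.linalg.norm(Fx) < 1e-12:
        break
print("approximate root:", x, " residual norm:", np.linalg.norm(Fx))
```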
Proximal quasi-Newton methods for regularized convex optimization with linear and accelerated sublinear convergence rates
In [19], a general, inexact, efficient proximal quasi-Newton algorithm for composite optimization problems has been proposed and a sublinear global convergence rate has been established. In this paper, we analyze the convergence properties of this method, both in the exact and inexact setting, in the case when the objective function is strongly convex. We also investigate a practical variant of...
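Below is a compact sketch of the proximal quasi-Newton idea for composite problems of the form f(x) + λ‖x‖₁. To keep the proximal subproblem in closed form, the variable metric is simplified here to a Barzilai-Borwein scaled identity with backtracking; the methods analyzed in that paper use richer metrics and allow inexact subproblem solves. The problem data are illustrative assumptions.

```python
# Compact sketch of a proximal quasi-Newton-style iteration for
# min_x 0.5*||A x - b||^2 + lam*||x||_1. The variable metric is simplified to a
# Barzilai-Borwein scaled identity with backtracking, so the proximal subproblem is
# closed-form soft-thresholding; this is not the specific algorithm of the paper.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 40))
b = rng.standard_normal(100)
lam = 0.1

f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)     # smooth part
f_grad = lambda x: A.T @ (A @ x - b)

x = np.zeros(40)
g = f_grad(x)
t = 1.0                                          # current metric scale (step size)
for it in range(300):
    while True:
        x_new = soft_threshold(x - t * g, t * lam)   # prox step in the scaled-identity metric
        d = x_new - x
        # Standard sufficient-decrease test; guarantees a valid prox-gradient step.
        if f(x_new) <= f(x) + g @ d + (d @ d) / (2 * t):
            break
        t *= 0.5
    g_new = f_grad(x_new)
    s, y = x_new - x, g_new - g
    x, g = x_new, g_new
    if s @ y > 1e-16:
        t = (s @ s) / (s @ y)                    # Barzilai-Borwein proposal for the next scale
print("objective:", f(x) + lam * np.sum(np.abs(x)),
      "nonzeros:", int(np.count_nonzero(np.abs(x) > 1e-8)))
```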
Journal
Journal title: Mathematical Programming
Year: 2021
ISSN: 0025-5610, 1436-4646
DOI: https://doi.org/10.1007/s10107-021-01622-5